Myopic Policy Bounds for Information Acquisition POMDPs

Authors

  • Mikko Lauri
  • Nikolay Atanasov
  • George J. Pappas
  • Risto Ritala
Abstract

This paper addresses the optimal control of robotic sensing systems for autonomous information gathering in scenarios such as environmental monitoring, search and rescue, and surveillance and reconnaissance. The information gathering problem is formulated as a partially observable Markov decision process (POMDP) with a reward function that captures uncertainty reduction. Unlike in the classical POMDP formulation, the resulting reward structure is nonlinear in the belief state, so traditional approaches do not apply directly. Instead of developing a new approximation algorithm, we show that if attention is restricted to a class of problems with certain structural properties, one can derive (often tight) upper and lower bounds on the optimal policy via an efficient myopic computation. These policy bounds can be applied in conjunction with an online branch-and-bound algorithm to accelerate the computation of the optimal policy. In a target tracking domain, we obtain informative lower and upper policy bounds with low computational effort. The performance of branch-and-bound is demonstrated and compared with exact value iteration.
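
To make the idea concrete, here is a minimal hypothetical sketch of how myopic bounds can prune a branch-and-bound search over action sequences; it is not the authors' implementation. For brevity it plans open loop (no branching on observations), and `step`, `lower_bound`, and `upper_bound` are assumed problem-specific callables supplied by the user.

```python
def branch_and_bound(belief, actions, horizon, step, lower_bound, upper_bound):
    """Hypothetical sketch: depth-first branch-and-bound over open-loop
    action sequences, pruning subtrees whose upper bound cannot beat
    the best lower bound found so far."""
    best_value, best_action = float("-inf"), None
    # Stack entries: (belief, next action, reward so far, steps left, root action).
    stack = [(belief, a, 0.0, horizon, a) for a in actions]
    while stack:
        b, a, acc, h, root = stack.pop()
        next_b, r = step(b, a)        # predicted belief update and expected reward
        acc += r
        if h == 1:                    # full sequence expanded: update incumbent
            if acc > best_value:
                best_value, best_action = acc, root
            continue
        # Myopic bounds on the optimal value-to-go from the predicted belief.
        lb = acc + lower_bound(next_b, h - 1)
        if lb > best_value:
            best_value, best_action = lb, root
        if acc + upper_bound(next_b, h - 1) <= best_value:
            continue                  # prune: subtree cannot beat the incumbent
        stack.extend((next_b, a2, acc, h - 1, root) for a2 in actions)
    return best_action, best_value
```

The tighter the myopic bounds, the more aggressively subtrees are pruned, which is why informative bounds translate directly into faster online planning.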

Related articles

POMDP Structural Results for Controlled Sensing

Structural results for POMDPs are important since solving POMDPs numerically is typically intractable; solving a classical POMDP is known to be PSPACE-complete [40]. Moreover, in controlled sensing problems [16], [26], [10], it is often necessary to use POMDPs that are nonlinear in the belief state in order to model the uncertainty in the state estimate. (For example, the variance of the state...
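
As a concrete illustration of such nonlinearity (a minimal sketch, not taken from the cited paper), consider rewarding the negative variance of a discrete state estimate. A classical POMDP reward is linear in the belief (an inner product with a fixed vector), whereas variance is quadratic in it:

```python
import numpy as np

def neg_variance_reward(belief, state_values):
    """Negative variance of the state estimate: quadratic, hence
    nonlinear, in the belief vector, unlike a classical reward b @ r."""
    mean = belief @ state_values
    return -(belief @ (state_values - mean) ** 2)

b = np.array([0.25, 0.5, 0.25])      # belief over three states
x = np.array([0.0, 1.0, 2.0])        # state values
print(neg_variance_reward(b, x))     # -0.5; peaks at 0 for a point-mass belief
```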

The Complexity of Policy Evaluation for Finite-Horizon Partially-Observable Markov Decision Processes

A partially-observable Markov decision process (POMDP) is a generalization of a Markov decision process that allows for incomplete information regarding the state of the system. POMDPs are used to model controlled stochastic processes, from health care to manufacturing control processes (see [19] for more examples). We consider several flavors of finite-horizon POMDPs. Our results concern the comple...

Implementation Techniques for Solving POMDPs in Personal Assistant Domains

Agents or agent teams deployed to assist humans often face the challenges of monitoring the state of key processes in their environment (including the state of their human users themselves) and making periodic decisions based on such monitoring. POMDPs appear well suited to enable agents to address these challenges, given the uncertain environment and cost of actions, but optimal policy generat...

PEGASUS: A policy search method for large MDPs and POMDPs

We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a model. Our approach is based on the following observation: Any (PO)MDP can be transformed into an “equivalent” POMDP in which all state transitions (given the current state and action) are deterministic. This reduces the...
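
The transformation admits a short sketch (a hypothetical illustration, not the paper's code, assuming a simulator that consumes one uniform random draw per step). Fixing the draws in advance makes every rollout a deterministic function of the policy, so policies can be compared without simulation noise:

```python
import random

def make_deterministic_rollout(transition, initial_state, horizon, seed):
    """Pre-sample all randomness so a rollout becomes a deterministic
    function of the policy alone (a PEGASUS-style transformation)."""
    rng = random.Random(seed)
    draws = [rng.random() for _ in range(horizon)]  # fixed once, reused on every call

    def rollout(policy):
        state, total = initial_state, 0.0
        for u in draws:                  # identical draws each call, so two
            action = policy(state)       # policies are evaluated on the
            state, reward = transition(state, action, u)  # same "scenarios"
            total += reward
        return total

    return rollout
```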

Journal:
  • CoRR

Volume: abs/1601.07279

Publication date: 2016